51 research outputs found

    Realtime projective multi-texturing of pointclouds and meshes for a realistic street-view web navigation

    Street-view web applications have now gained widespread popularity. Targeting the general public, they offer ease of use, but while they allow efficient navigation at pedestrian level, the immersive quality of such renderings remains low: the user is usually stuck at specific positions, and transitions introduce artefacts, in particular parallax and aliasing. We propose a method to enhance the realism of street-view navigation systems using hybrid rendering based on realtime projective texturing onto meshes and point clouds with occlusion handling. It requires only minimal pre-processing, allowing fast data updates, progressive streaming (a mesh-based approximation refined with point cloud details) and precise visualization of the unaltered raw data.
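    As a rough, CPU-side illustration of the underlying idea (not the authors' GPU implementation, which also handles occlusion and streaming), the following Python sketch projects 3D points into a calibrated pinhole camera image to pick up their colours; the camera parameters K, R, t and the image are placeholders, not the paper's data or code.

```python
# Minimal sketch of projective texturing: colour 3D points by projecting
# them into a calibrated camera image. Occlusion handling is omitted.
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera."""
    cam = (R @ points.T + t.reshape(3, 1)).T       # world -> camera frame
    depth = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
    return uv, depth

def colorize(points, image, K, R, t):
    """Assign each visible point the image colour it projects onto."""
    h, w = image.shape[:2]
    uv, depth = project_points(points, K, R, t)
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    visible = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[visible] = image[v[visible], u[visible]]
    return colors, visible
```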

    A Classification-Segmentation Framework for the Detection of Individual Trees in Dense MMS Point Cloud Data Acquired in Urban Areas

    In this paper, we present a novel framework for detecting individual trees in densely sampled 3D point cloud data acquired in urban areas. Given a 3D point cloud, the objective is to assign point-wise labels that are both class-aware and instance-aware, a task known as instance-level segmentation. To achieve this, our framework addresses two successive steps. The first step uses geometric features for a binary point-wise semantic classification, assigning the semantic class labels "tree points" and "other points" to irregularly distributed 3D points. The second step is a semantic segmentation that separates individual trees within the "tree points". This is achieved by an efficient adaptation of the mean shift algorithm and a subsequent segment-based shape analysis relying on semantic rules to retain only plausible tree segments. We demonstrate the performance of our framework on a publicly available benchmark dataset acquired with a mobile mapping system in the city of Delft in the Netherlands. This dataset contains 10.13 M labeled 3D points, of which 17.6% are labeled as "tree points". The derived results clearly reveal a semantic classification of high accuracy (up to 90.77%) and an instance-level segmentation of high plausibility, while the simplicity, applicability and efficiency of the involved methods allow running the complete framework on a standard laptop computer in a reasonable processing time (less than 2.5 h).
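    As a rough illustration of the second step, the following Python sketch clusters tree-labelled points with mean shift so that each cluster approximates one tree. The paper uses an efficient adaptation of mean shift followed by rule-based shape analysis; here scikit-learn's standard MeanShift is used as a stand-in, and the bandwidth and minimum segment size are arbitrary placeholders.

```python
# Sketch of instance-level segmentation of "tree points" via mean shift.
import numpy as np
from sklearn.cluster import MeanShift

def segment_trees(tree_points, bandwidth=2.0, min_points=100):
    """Cluster Nx3 tree points into individual tree instances."""
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    labels = ms.fit_predict(tree_points[:, :2])    # cluster in the XY plane
    instances = []
    for lab in np.unique(labels):
        segment = tree_points[labels == lab]
        if len(segment) >= min_points:             # crude plausibility filter
            instances.append(segment)
    return instances
```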

    Underground Visualization: web-app, virtual reality, ex situ and in situ augmented reality.

    In this work-in-progress and demo paper, we present an ongoing experiment in underground visualization, in the context of urban 3D geovisualization, for operational purposes in urban planning. Following a preliminary project on sewer network digitization (Nguyen et al., 2018), the 3D model of the sewers has been made available for web visualization and interaction. In this paper, we show that various use contexts can be addressed by adding the ability to visualize and interact with the sewer 3D model in virtual reality (VR) and in ex situ / in situ augmented reality (AR). These prototypes will be evaluated further to validate our hypotheses that they foster collaboration, immersion for simulation and training, and safer urban development.

    Rendering and graphic representation of spatio-temporal information

    This position paper presents some of the research questions and ongoing work of our geovisualization team (the GEOVIS team of the LaSTIG laboratory) on the visualization and visual analysis of spatio-temporal information about the territory, through interaction and immersion. The classic questions of geovisualization remain: which graphical representations, which interfaces, what quality. However, the context has evolved: the complexity of physical, historical and sociological phenomena and dynamics and their interactions with geographic space, the volume of heterogeneous spatial data, and the needs of very diverse users in terms of vision, perception and cognition require an even closer convergence of related fields on graphical representation and data exploration, in order to improve visual-analysis capabilities in geovisualization. In particular, we present here research questions and work specific to the interactive exploration of rendering and (carto)graphic representation capabilities for spatio-temporal data in the context of geovisualization. This work addresses issues related to the visualization of urban geographic spaces, to the analysis of urban dynamics (history, planning), and to geophysical dynamics (flooding, meteorology). It is implemented on an open-source 3D visualization platform.

    Fast Image and LiDAR alignment based on 3D rendering in sensor topology

    Mobile mapping systems are now commonly used in large urban acquisition campaigns. They are often equipped with LiDAR sensors and optical cameras, producing very large multimodal datasets. The fusion of both modalities serves different purposes such as point cloud colorization, geometry enhancement or object detection. However, this fusion cannot be done directly, as the two modalities are only coarsely registered. This paper presents a fully automatic approach for refining the registration between LiDAR projections and optical images, based on 3D renderings of the LiDAR point cloud. First, a coarse 3D mesh is generated from the LiDAR point cloud using the sensor topology. The mesh is then rendered in the image domain, and a variational approach is used to align the rendering with the optical image. This method achieves high-quality results at very low computational cost. Results on real data demonstrate the efficiency of the model for aligning LiDAR projections and optical images.
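    To illustrate the registration-refinement idea (not the paper's variational model), the following Python sketch aligns a LiDAR rendering, assumed here to already be rasterized to an image-space intensity map, with the optical image by searching for the global 2D shift that minimizes a sum-of-squared-differences cost; the actual method estimates a much richer alignment.

```python
# Toy alignment of a rendered LiDAR intensity image to an optical image
# by optimizing a global 2D translation (a stand-in for the variational step).
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ssd_cost(params, rendering, optical):
    """SSD between the optical image and the shifted LiDAR rendering."""
    moved = nd_shift(rendering, shift=params, order=1, mode="nearest")
    return float(np.sum((moved - optical) ** 2))

def refine_alignment(rendering, optical):
    """Estimate the (dy, dx) shift aligning the rendering to the optical image."""
    result = minimize(ssd_cost, x0=np.zeros(2), args=(rendering, optical),
                      method="Powell")
    return result.x
```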

    TerraMobilita/IQmulus Urban Point Cloud Classification Benchmark

    The objective of the TerraMobilita/iQmulus 3D urban analysis benchmark is to evaluate the current state of the art in urban scene analysis from mobile laser scanning (MLS). A very detailed semantic tree for urban scenes is proposed (cf. Figure 1). We call analysis the capacity of a method to separate the points of the scene into these categories (classification) and to separate the different objects of the same type for object classes (detection). The ground truth is produced manually in two steps using advanced editing tools developed especially for this benchmark. Based on this ground truth, the benchmark aims at evaluating the classification, detection and segmentation quality of the submitted results.

    TerraMobilita/iQmulus urban point cloud analysis benchmark

    The objective of the TerraMobilita/iQmulus 3D urban analysis benchmark is to evaluate the current state of the art in large-scale urban scene analysis from mobile laser scanning (MLS). A very detailed semantic tree for urban scenes is proposed. We call analysis the capacity of a method to separate the points of the scene into these categories (classification) and to separate the different objects of the same type for object classes (detection). A very large ground truth is produced manually in two steps using advanced editing tools developed especially for this benchmark. Based on this ground truth, the benchmark aims at evaluating the classification, detection and segmentation quality of the submitted results.
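    Neither benchmark abstract spells out the exact evaluation formulas, so the following Python sketch shows one generic way to score submitted point-wise labels against the ground truth with per-class precision, recall and IoU; it is an assumption about the metrics, not the benchmark's official evaluation code.

```python
# Generic per-class point-wise evaluation of predicted labels vs. ground truth.
import numpy as np

def per_class_scores(ground_truth, predicted, classes):
    """Return {class: (precision, recall, iou)} for point-wise labels."""
    scores = {}
    for c in classes:
        gt, pred = ground_truth == c, predicted == c
        tp = np.sum(gt & pred)
        fp = np.sum(~gt & pred)
        fn = np.sum(gt & ~pred)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
        scores[c] = (precision, recall, iou)
    return scores
```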
    • …